On the Optimal Scaling of the Modified Metropolis-Hastings algorithm

Authors

  • K. M. Zuev
  • J. L. Beck
  • L. S. Katafygiotis
Abstract

Estimation of small failure probabilities is one of the most important and challenging problems in reliability engineering. In cases of practical interest, the failure probability is given by a high-dimensional integral. Since multivariate integration suffers from the curse of dimensionality, the usual numerical methods are inapplicable. Over the past decade, the civil engineering research community has increasingly realized the potential of advanced simulation methods for treating reliability problems. The Subset Simulation method, introduced by Au & Beck (2001a), is considered to be one of the most robust advanced simulation techniques for solving high-dimensional nonlinear problems. The Modified Metropolis-Hastings (MMH) algorithm, a variation of the original Metropolis-Hastings algorithm (Metropolis et al. 1953, Hastings 1970), is used in Subset Simulation for sampling from conditional high-dimensional distributions. The efficiency and accuracy of Subset Simulation directly depend on the ergodic properties of the Markov chain generated by MMH, in other words, on how fast the chain explores the parameter space. The latter is determined by the choice of the one-dimensional proposal distributions, which makes this choice very important. It was noticed in Au & Beck (2001a) that the performance of MMH is not sensitive to the type of the proposal PDFs; however, it strongly depends on their variance. Nevertheless, in almost all real-life applications, the scaling of the proposal PDFs is still largely an art. The issue of optimal scaling was already recognized in the original paper of Metropolis et al. (1953). Gelman, Roberts, and Gilks (Gelman et al. 1996) were the first to publish theoretical results on the optimal scaling of the original Metropolis-Hastings algorithm. They proved that, for optimal sampling from a high-dimensional Gaussian distribution, the Metropolis-Hastings algorithm should be tuned to accept only approximately 25% of the proposed moves. This came as an unexpected and counter-intuitive result. Since then, many papers have been published on the optimal scaling of the original Metropolis-Hastings algorithm. In this paper, in the spirit of Gelman et al. (1996), we address the following question, which is of high practical importance: what are the optimal one-dimensional Gaussian proposal PDFs for simulating a high-dimensional conditional Gaussian distribution using the MMH algorithm? We present a collection of observations on the optimal scaling of the Modified Metropolis-Hastings algorithm for different numerical examples, and develop an optimal scaling strategy for MMH when it is employed within Subset Simulation for estimating small failure probabilities.
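
To make the algorithm being scaled concrete, the following is a minimal sketch of one MMH transition for a target that is a standard Gaussian conditioned on a domain F, using one-dimensional Gaussian proposals. It is a generic illustration, not the authors' code: the function name, signature, and the user-supplied domain check in_F are assumptions made here for the example.

```python
import numpy as np

def mmh_step(x, in_F, sigma, rng):
    """One Modified Metropolis-Hastings transition (illustrative sketch).

    Assumes the unconditional components of x are independent standard
    Gaussians and the target is their distribution conditioned on x in F.

    x     : current state (1-D numpy array), assumed to lie in F
    in_F  : hypothetical user-supplied callable, True if a point is in F
    sigma : spread of the 1-D Gaussian proposal PDFs (the scaling
            parameter whose optimal choice is studied in the paper)
    rng   : numpy.random.Generator
    """
    n = x.size
    xi = np.empty(n)
    # Step 1: component-wise pre-acceptance with 1-D Gaussian proposals
    for i in range(n):
        cand = x[i] + sigma * rng.standard_normal()
        # Ratio of the 1-D standard Gaussian marginal PDFs
        ratio = np.exp(0.5 * (x[i] ** 2 - cand ** 2))
        xi[i] = cand if rng.random() < min(1.0, ratio) else x[i]
    # Step 2: accept the whole candidate only if it stays in F
    return xi if in_F(xi) else x.copy()
```

A Markov chain is obtained by iterating this step from a seed in F; the acceptance rate, and hence how fast the chain explores the parameter space, is controlled by sigma, which is exactly the scaling parameter the paper tunes.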

Similar articles

Scaling analysis of delayed rejection MCMC methods

In this paper, we study the asymptotic efficiency of the delayed rejection strategy. In particular, the efficiency of the delayed rejection Metropolis-Hastings algorithm is compared to that of the regular Metropolis algorithm. To allow for a fair comparison, the study is carried out under optimal mixing conditions for each of these algorithms. After introducing optimal scaling results for the delay...

Approximating Bayes Estimates by Means of the Tierney Kadane, Importance Sampling and Metropolis-Hastings within Gibbs Methods in the Poisson-Exponential Distribution: A Comparative Study

Here, we work on the problem of point estimation of the parameters of the Poisson-exponential distribution through the Bayesian and maximum likelihood methods based on complete samples. The point Bayes estimates under the symmetric squared error loss (SEL) function are approximated using three methods, namely the Tierney Kadane approximation method, the importance sampling method and the Metrop...

Optimal Scaling for Partially Updating MCMC Algorithms

In this paper we shall consider optimal scaling problems for high-dimensional Metropolis–Hastings algorithms where updates can be chosen to be lower dimensional than the target density itself. We find that the optimal scaling rule for the Metropolis algorithm, which tunes the overall algorithm acceptance rate to be 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well. Further...
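
In practice, the 0.234 rule quoted above is often exploited by adapting the proposal spread until the empirical acceptance rate matches the target. The snippet below is a common log-scale adaptation heuristic, not code from the cited paper; all names are chosen here for illustration.

```python
import math

def adapt_sigma(sigma, observed_rate, target=0.234, gain=0.5):
    """Nudge the proposal standard deviation toward the target
    acceptance rate: accepting too often inflates sigma, accepting
    too rarely shrinks it. Working on the log scale keeps sigma > 0."""
    return sigma * math.exp(gain * (observed_rate - target))
```

In adaptive MCMC implementations the gain is usually decreased over iterations so that the adaptation vanishes asymptotically and the chain's limiting behaviour is preserved.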

Methodology for inference on the Markov modulated Poisson process and theory for optimal scaling of the random walk Metropolis

Two distinct strands of research are developed: new methodology for inference on the Markov modulated Poisson process (MMPP), and new theory on optimal scaling for the random walk Metropolis (RWM). A novel technique is presented for simulating from the exact distribution of a continuous time Markov chain over an interval given the start and end states and the infinitesimal generator. This is us...

Optimal scaling of discrete approximations to Langevin diffusions

We consider the optimal scaling problem for proposal distributions in Hastings-Metropolis algorithms derived from Langevin diffusions. We prove an asymptotic diffusion limit theorem and show that the relative efficiency of the algorithm can be characterised by its overall acceptance rate, independently of the target distribution. The asymptotically optimal acceptance rate is 0.574. We show that...
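
For reference, a Langevin-derived (MALA-type) proposal of the kind discussed above drifts the current state along the gradient of the log target before adding Gaussian noise. The sketch below is a generic illustration of that proposal under these assumptions, not code from the cited work; grad_log_pi and step are hypothetical user-supplied quantities.

```python
import numpy as np

def mala_proposal(x, grad_log_pi, step, rng):
    """Langevin proposal: gradient drift plus Gaussian noise.
    The 0.574 figure above is the asymptotically optimal acceptance
    rate when such proposals are used inside a Metropolis-Hastings
    accept/reject step."""
    drift = 0.5 * step ** 2 * grad_log_pi(x)
    return x + drift + step * rng.standard_normal(x.size)
```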


Publication date: 2011